Dynamic Revenue Sharing

Neural Information Processing Systems

Many online platforms act as intermediaries between a seller and a set of buyers. Examples include online retailers (such as eBay) selling items on behalf of sellers to buyers, and advertising exchanges (such as AdX) selling pageviews on behalf of publishers to advertisers. In such settings, revenue sharing is central to running the marketplace, and fixed-percentage revenue-sharing schemes are often used to split the revenue between the platform and the sellers. In particular, such schemes require the platform to (i) take at most a constant fraction \alpha of the revenue from auctions and (ii) pay the seller at least the seller-declared opportunity cost c for each item sold. A straightforward way to satisfy these constraints is to set a reserve price of c / (1 - \alpha) for each item, but this is not the profit-maximizing solution for the intermediary.
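The two constraints and the naive reserve price can be sketched concretely. The function names below are illustrative, not from the paper; the sketch just checks that any clearing price at or above c / (1 - \alpha) satisfies both the platform-share cap and the seller's cost guarantee.

```python
# Sketch of the fixed-percentage revenue-sharing constraints described above.
# Names (naive_reserve, split_revenue) are hypothetical, for illustration only.

def naive_reserve(cost: float, alpha: float) -> float:
    """Reserve price c / (1 - alpha): the smallest clearing price at which
    the seller's (1 - alpha) share still covers the declared cost c."""
    return cost / (1.0 - alpha)

def split_revenue(price: float, cost: float, alpha: float):
    """Split an auction's clearing price between platform and seller,
    checking both revenue-sharing constraints."""
    platform = alpha * price  # constraint (i): platform takes at most alpha
    seller = price - platform
    assert seller >= cost     # constraint (ii): seller recovers cost c
    return platform, seller

# With c = 1.0 and alpha = 0.2, the naive reserve is 1.25; at exactly the
# reserve, the seller's share is exactly c, so the guarantee binds.
reserve = naive_reserve(1.0, 0.2)
platform, seller = split_revenue(reserve, 1.0, 0.2)
```

The point of the abstract is that while this reserve makes both constraints trivially hold, it can price out buyers whose bids fall between c and c / (1 - \alpha), which is why it is not optimal for the intermediary's profit.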

  Country: South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.07)
  Industry: Retail (0.59)



Towards Federated Foundation Models: Scalable Dataset Pipelines for Group-Structured Learning

Zachary Charles

Neural Information Processing Systems

We introduce Dataset Grouper, a library to create large-scale group-structured (e.g., federated) datasets, enabling federated learning simulation at the scale of foundation models. This library facilitates the creation of group-structured versions of existing datasets based on user-specified partitions, and directly leads to a variety of useful heterogeneous datasets that can be plugged into existing software frameworks. Dataset Grouper offers three key advantages. First, it scales to settings where even a single group's dataset is too large to fit in memory. Second, it provides flexibility, both in choosing the base (non-partitioned) dataset and in defining partitions.
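Dataset Grouper's own API is not reproduced here; as a concept-only sketch, the following hypothetical helper shows what a user-specified partition of a base dataset into groups looks like (each group playing the role of one client in a federated simulation). Unlike this in-memory version, the library is designed so that even a single group's data need not fit in memory.

```python
from collections import defaultdict
from typing import Callable, Dict, Iterable, List

# Illustrative sketch only: partition_by_group is NOT part of the Dataset
# Grouper API. It assigns each example to a group via a user-specified key
# function, mimicking a group-structured (federated) dataset.

def partition_by_group(examples: Iterable[dict],
                       group_key: Callable[[dict], str]) -> Dict[str, List[dict]]:
    groups: Dict[str, List[dict]] = defaultdict(list)
    for example in examples:
        groups[group_key(example)].append(example)
    return dict(groups)

# Example: partition text records by author, so each simulated client
# holds one author's documents.
records = [
    {"author": "a", "text": "hello"},
    {"author": "b", "text": "world"},
    {"author": "a", "text": "again"},
]
clients = partition_by_group(records, lambda ex: ex["author"])
```

Partitioning by a natural key such as author produces the kind of heterogeneous, non-IID client datasets that federated learning benchmarks need.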



Learning Distributed and Fair Policies for Network Load Balancing as Markov Potential Game

Neural Information Processing Systems

At each step t \in H within a horizon H, each agent i receives a workload w_i(t) \in W and takes an action a_i(t) = \{a_{ij}(t)\}_{j=1}^{N} according to its policy \pi_i \in \Pi, conditioned on its local observation o_i(t). The critic is trained toward the soft Bellman target Q(o, a) \leftarrow r(o, a) + \gamma E_{o'}[V(o')], where V(o') = E_{a'}[\bar{Q}(o', a') - \alpha \log \pi(a'|o')] and \bar{Q} is the target Q network; the actor policy is updated with the gradient \nabla E_o[E_{a \sim \pi}[\alpha \log \pi(a|o) - Q(o, a)]].
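These soft-actor-critic-style targets can be evaluated numerically for a toy discrete-action case. All names and values below are assumptions for illustration, not the paper's implementation: the entropy-regularized value V(o') averages the target Q minus \alpha log \pi under the policy, and the critic regresses toward r + \gamma V(o').

```python
import numpy as np

# Toy numeric sketch of the entropy-regularized (soft) targets above.
# All tensors and names are illustrative assumptions.

def soft_value(target_q: np.ndarray, log_pi: np.ndarray,
               pi: np.ndarray, alpha: float) -> float:
    """V(o') = E_{a'~pi}[Q_target(o', a') - alpha * log pi(a'|o')]."""
    return float(np.sum(pi * (target_q - alpha * log_pi)))

def bellman_target(reward: float, gamma: float, v_next: float) -> float:
    """Critic regression target: r(o, a) + gamma * E_{o'}[V(o')]."""
    return reward + gamma * v_next

# Two discrete actions at the next observation o', uniform policy.
pi = np.array([0.5, 0.5])
log_pi = np.log(pi)
target_q = np.array([1.0, 3.0])

v_next = soft_value(target_q, log_pi, pi, alpha=0.2)
y = bellman_target(reward=1.0, gamma=0.99, v_next=v_next)
```

Note how the -\alpha log \pi term adds an entropy bonus to the value: with a uniform policy it raises V(o') by \alpha log 2 above the plain expected Q, which is what keeps the learned load-balancing policies stochastic rather than deterministic.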